Translation and Dictionary
Words near each other
・ Mats Hinze
・ Mats Hummels
・ Matrix of country subdivisions
・ Matrix of domination
・ Matrix of Leadership
・ Matrix of ones
・ Matrix of pain
・ Matrix Partners
・ Matrix pencil
・ Matrix planting
・ Matrix polynomial
・ Matrix population models
・ Matrix Powertag
・ Matrix product state
・ Matrix Quest
・ Matrix regularization
・ Matrix representation
・ Matrix representation of conic sections
・ Matrix representation of Maxwell's equations
・ Matrix Requirements Medical
・ Matrix ring
・ Matrix scheme
・ Matrix similarity
・ Matrix Software
・ Matrix splitting
・ Matrix string theory
・ Matrix t-distribution
・ Matrix Template Library
・ Matrix theory (physics)
・ Matrix Toolkit Java



Matrix regularization : English Wikipedia edition
Matrix regularization
In the field of statistical learning theory, matrix regularization generalizes notions of vector regularization to cases where the object to be learned is a matrix. The purpose of regularization is to enforce conditions, for example sparsity or smoothness, that can produce stable predictive functions. For example, in the more common vector framework, Tikhonov regularization optimizes over
: \min_x \|Ax-y\|^2 + \lambda \|x\|^2
to find a vector, x, that is a stable solution to the regression problem. When the system is described by a matrix rather than a vector, this problem can be written as
: \min_X \|AX-Y\|^2 + \lambda \|X\|^2
where the squared vector norm that penalizes x has been replaced by a squared matrix norm on X, typically the Frobenius norm.
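The matrix form of Tikhonov regularization above has the same closed-form solution as the vector case. A minimal NumPy sketch (the sizes, data, and λ below are invented for illustration):

```python
import numpy as np

# Illustrative problem: A is n x d, Y is n x T; all values are made up.
rng = np.random.default_rng(0)
n, d, T = 50, 8, 3
A = rng.normal(size=(n, d))
X_true = rng.normal(size=(d, T))
Y = A @ X_true + 0.01 * rng.normal(size=(n, T))

lam = 0.1
# min_X ||AX - Y||^2 + lam ||X||^2 has the closed form
# X = (A^T A + lam I)^{-1} A^T Y, i.e. ridge regression
# applied simultaneously to every column of Y.
X_hat = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ Y)
```

Each column of X_hat equals the vector Tikhonov solution for the corresponding column of Y, which is why the matrix problem is a direct generalization of the vector one.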
Matrix regularization has applications in matrix completion, multivariate regression, and multi-task learning. Ideas of feature and group selection can also be extended to matrices, and these can be generalized to the nonparametric case of multiple kernel learning.
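In the matrix-completion application mentioned above, a common regularizer is the nuclear norm, whose proximal operator soft-thresholds the singular values. A hedged sketch of a soft-impute-style iteration (the data, observation mask, and threshold τ are illustrative, not from the article):

```python
import numpy as np

def svt(M, tau):
    # Proximal operator of tau * nuclear norm:
    # soft-threshold the singular values of M.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
M = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 15))  # rank-2 ground truth
mask = rng.random(M.shape) < 0.6                          # observed entries

# Soft-impute style loop: fill unobserved entries with the current
# estimate, then shrink the filled-in matrix toward low rank.
W = np.zeros_like(M)
for _ in range(300):
    W = svt(np.where(mask, M, W), tau=0.5)
```

Because the ground truth is low rank, the shrinkage step lets the observed 60% of entries determine the missing ones.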
== Basic definition ==

Consider a matrix W to be learned from a set of examples, S=(X_i^t, y_i^t), where i goes from 1 to n and t goes from 1 to T. Let each input matrix X_i be in \mathbb{R}^{D\times T}, and let W be of size D\times T. A general model for the output y can be posed as
: y_i^t=\langle W,X_i^t\rangle_F
where the inner product is the Frobenius inner product. For different applications the matrices X_i will have different forms,〔Lorenzo Rosasco, Tomaso Poggio, "A Regularization Tour of Machine Learning — MIT-9.520 Lectures Notes" Manuscript, Dec. 2014.〕 but for each of these the optimization problem to infer W can be written as
: \min_W E(W) + \lambda R(W)
where E(W) is the empirical error of the predictions y_i^t = \langle W, X_i^t\rangle_F under the Frobenius inner product, and R(W) is a matrix regularization penalty.
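A minimal gradient-descent sketch of this general setup, taking E to be mean squared error and R(W) = \|W\|_F^2 (the sizes, data, λ, and step size are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
D, T, n = 5, 4, 100
W_true = rng.normal(size=(D, T))
Xs = rng.normal(size=(n, D, T))          # one input matrix per example
ys = np.einsum('ndt,dt->n', Xs, W_true)  # y_i = <W, X_i>_F (Frobenius)

lam, lr = 0.01, 0.01
W = np.zeros((D, T))
for _ in range(1000):
    resid = np.einsum('ndt,dt->n', Xs, W) - ys        # prediction errors
    grad = 2 * np.einsum('n,ndt->dt', resid, Xs) / n  # grad of mean sq. error
    grad += 2 * lam * W                               # grad of lam ||W||_F^2
    W -= lr * grad
```

With a squared Frobenius penalty the objective is smooth and strongly convex, so plain gradient descent recovers W up to the small shrinkage that λ introduces.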

Excerpt source: Wikipedia, the free encyclopedia
Read the full article on "Matrix regularization" at Wikipedia




Copyright(C) kotoba.ne.jp 1997-2016. All Rights Reserved.